feat: use DifferentiationInterface for sparse AD #468
Conversation
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Congrats on the adaptation and thank you for trusting me with this!
I've never used LinearSolve before so take my comments with a grain of salt, but I find it needlessly complicated to mix the SparseDiffTools legacy syntax with the new sparse API of ADTypes. Do you think it is a problem?
This preserves support for structured matrices. Once support for that lands in SparseMatrixColorings, we can just delete that check and everything will just work.
To clarify the rationale:

Previously we had sparse backends (a simple wrapper over a dense backend) and we had sparsity specification inside the problem.

Now what does the new API look like? Users always provide a dense_ad to the solver. If the function has sparsity information in the form of a detector/colorvec/prototype (basically any hint that the Jacobian is sparse), we construct an AutoSparse backend using that information. From DI's perspective it always sees the final autodiff backend that we constructed, not the one the user passed in.
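The construction described above could be sketched roughly like this (a hypothetical `maybe_sparsify` helper for illustration, not NonlinearSolve's actual internals; it assumes the ADTypes / SparseMatrixColorings APIs):

```julia
using ADTypes: AutoSparse, AutoForwardDiff
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm

# Hypothetical helper: wrap the user's dense backend in AutoSparse
# whenever the problem carries a sparsity hint, otherwise pass it through.
function maybe_sparsify(dense_ad; sparsity_detector=nothing)
    sparsity_detector === nothing && return dense_ad  # no hint: stay dense
    return AutoSparse(dense_ad;
        sparsity_detector=sparsity_detector,
        coloring_algorithm=GreedyColoringAlgorithm())
end

# DI only ever sees the backend this returns, not what the user passed in.
backend = maybe_sparsify(AutoForwardDiff();
                         sparsity_detector=TracerSparsityDetector())
```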
Right, so essentially you reconstruct an
That seems reasonable for now, but it might lead to a breaking change when we swing back to full
Just to clarify, with structured matrices, there are two things we can optimize:
At the moment, if I understand correctly:
I'll try to come up with a prototype putting everything in SparseMatrixColorings.jl.
On this branch you have optimized coloring and decompression for
I will stack a PR on top of this and try it out.
What do you mean by full?

I am generally against having 2 APIs to accomplish the exact same thing unless each API provides additional disjoint benefits. If you are referring to the coloring_algorithm, we can always rename it.

@ChrisRackauckas what do you think about this new API proposal for v4?
I meant that for users who pass an

If you want to deprecate this way of handling sparsity, I would almost prefer throwing an error if someone gives you an
Yes, I agree. The function currently throws a depwarn; that will become an error in v4.
I added support for
What else would you need to drop SparseDiffTools completely? In terms of functionality it should all be there; performance may still lag a bit behind, but honestly I'm not even sure.
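For reference, a minimal end-to-end sparse Jacobian through DI looks like this (a sketch assuming the current DifferentiationInterface / ADTypes APIs; the function `f` is just an illustration):

```julia
using DifferentiationInterface
using ADTypes: AutoSparse, AutoForwardDiff
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm
import ForwardDiff

# Toy function with a sparse Jacobian: each output touches few inputs.
f(x) = [x[1]^2, x[2] * x[3], x[3]]

backend = AutoSparse(AutoForwardDiff();
    sparsity_detector=TracerSparsityDetector(),
    coloring_algorithm=GreedyColoringAlgorithm())

# Returns a sparse matrix containing only the structural nonzeros.
J = jacobian(f, backend, rand(3))
```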
See the other PR. Tests pass locally for me, so we can go ahead and remove SparseDiffTools. I will make some additional pushes to remove SparseDiffTools from the tests and docs.
Added a big table to clarify the selection mechanism: https://github.com/SciML/NonlinearSolve.jl/blob/ap/sparse_di/docs/src/basics/sparsity_detection.md
Is there anything we should be doing to pass the sparsity info we get down to LinearSolve? It seems like knowing that should make the solve faster.
We are giving LinearSolve the exact matrix it needs to solve.
We assume for
Yeah, that goes without saying for me, but if you prepare DI's Jacobian with a given sparsity pattern, you'll get incorrect outputs if the sparsity changes between points.
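To illustrate the caveat, a sketch of DI's preparation workflow (assuming the `prepare_jacobian` API; `f` and the backend are placeholder examples):

```julia
using DifferentiationInterface
using ADTypes: AutoSparse, AutoForwardDiff
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm
import ForwardDiff

f(x) = [x[1]^2, x[2] * x[3], x[3]]
backend = AutoSparse(AutoForwardDiff();
    sparsity_detector=TracerSparsityDetector(),
    coloring_algorithm=GreedyColoringAlgorithm())

x0 = rand(3)
prep = prepare_jacobian(f, backend, x0)  # pattern and coloring are fixed here

# Reusing prep at a new point is only correct if the true pattern there
# matches the prepared one; entries outside it are silently dropped.
J = jacobian(f, prep, backend, rand(3))
```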
Fair warning: you're probably going to see an epic slowdown when computing sparse Jacobians with FiniteDiff. The reason is that, in DI, sparse Jacobians rely on pushforwards, and I don't have a native pushforward operator in FiniteDiff, so I make do with a derivative closure.
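For context, that fallback amounts to a directional derivative: the pushforward `J(x) * v` can be emulated by differentiating the closure `t -> f(x + t * v)` at `t = 0`. A hand-rolled central-difference sketch of the idea (not FiniteDiff's actual internals):

```julia
# Emulate a pushforward (JVP) by central differences along direction v:
# J(x) * v ≈ (f(x + h v) - f(x - h v)) / 2h.
pushforward(f, x, v; h=sqrt(eps())) = (f(x .+ h .* v) .- f(x .- h .* v)) ./ (2h)

f(x) = [x[1]^2, x[2] * x[3]]
# First column of the Jacobian at (1, 2, 3), i.e. J * e1 ≈ [2.0, 0.0]:
pushforward(f, [1.0, 2.0, 3.0], [1.0, 0.0, 0.0])
```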
Can a native pushforward be added to FiniteDiff? It seems like it should be pretty straightforward...
Knock yourselves out! I'll happily use it.
In the meantime, maybe I'll try to use the sparse Jacobian inside FiniteDiff as a workaround |
@avik-pal with the very newest versions of DifferentiationInterface (0.6.4) and SparseMatrixColorings (0.4.4), your structured tests from #470 should also pass. |
The only part where we don't use DI is structured Jacobians. That still uses SparseDiffTools
TODO: